# Pckgs -------------------------------------
library(fs) # Cross-Platform File System Operations Based on 'libuv'
library(tidyverse) # Easily Install and Load the 'Tidyverse'
library(janitor) # Simple Tools for Examining and Cleaning Dirty Data
library(skimr) # Compact and Flexible Summaries of Data
library(here) # A Simpler Way to Find Your Files
library(paint) # paint data.frames summaries in colour
library(readxl) # Read Excel Files
library(tidytext) # Text Mining using 'dplyr', 'ggplot2', and Other Tidy Tools
library(SnowballC) # Snowball Stemmers Based on the C 'libstemmer' UTF-8 Library
library(rsample) # General Resampling Infrastructure
library(rvest) # Easily Harvest (Scrape) Web Pages
library(cleanNLP) # A Tidy Data Model for Natural Language Processing
library(kableExtra) # Construct Complex Table with 'kable' and Pipe Syntax

WB Project PDO text analysis
Work in progress

Set up
— Note on cleanNLP package
cleanNLP supports multiple backends for processing text, such as CoreNLP, spaCy, udpipe, and stanza. Each of these backends has different capabilities and might require different initialization procedures.
- CoreNLP ~ powerful Java-based NLP toolkit developed by Stanford, which includes many linguistic tools like tokenization, part-of-speech tagging, and named entity recognition.
  - ❕❗️ NEEDS EXTERNAL INSTALLATION (must be installed in Java with `cnlp_install_corenlp()`, which installs the Java JAR files and models).
- spaCy ~ fast and modern NLP library written in Python. It provides advanced features like dependency parsing, named entity recognition, and tokenization.
  - ❕❗️ NEEDS EXTERNAL INSTALLATION (must be installed in Python with `spacy_install()`, which installs both spaCy and the necessary Python dependencies), and the `spacyr` R package must be installed to interface with it.
- udpipe ~ R package that provides bindings to the UDPipe NLP toolkit. Fast, lightweight, and language-agnostic NLP library for tokenization, part-of-speech tagging, lemmatization, and dependency parsing.
- stanza ~ another modern NLP library from Stanford, similar to CoreNLP but built on PyTorch; it supports over 66 languages.
When you initialize a backend (like CoreNLP) in cleanNLP, it stays active for the entire session unless you reinitialize it or explicitly change it.
# ---- 1) Initialize the CoreNLP backend
library(cleanNLP)
cnlp_init_corenlp()
# If you want to specify a language or model path:
cnlp_init_corenlp(language = "en"
                  # , model_path = "/path/to/corenlp-models"
)
# ---- 2) Initialize the spaCy backend
library(cleanNLP)
library(spacyr)
# Initialize spaCy in cleanNLP
cnlp_init_spacy()
# Optional: specify language model
cnlp_init_spacy(model_name = "en_core_web_sm")
# ---- 3) Initialize the udpipe backend
library(cleanNLP)
# Initialize udpipe backend
cnlp_init_udpipe(model_name = "english")
# ---- 4) Initialize the stanza backend

—————————————————————————
Data sources
WB Projects & Operations
World Bank Projects & Operations can be explored at:
- Data Catalog, from which:
  - Accessibility Classification: public under Creative Commons Attribution 4.0
  - For example: https://datacatalog.worldbank.org/search/dataset/0037800 https://datacatalog.worldbank.org/search/dataset/0037800/World-Bank-Projects---Operations
—————————————————————————
Load pre-processed Projs’ PDO dataset pdo_train_t
[Saved file projs_train_t]
Done in **analysis/_01a_WB_project_pdo_prep.qmd**
- I manually retrieved ALL WB projects approved between FY 1947 and 2026 (as of 31/08/2024), simply using the Excel button on the WBG Projects page (link: “list-download-excel”).
- Then saved the huge `.xls` file in `data/raw_data/project2/all_projects_as_of29ago2024.xls` (plus an `Rdata` copy of the original file).
- Split the dataset and kept only `projs_train` (50% of projects with PDO text, i.e. 4413 PDOs).
- Cleaned the dataset and saved `projs_train_t` (cleaned train dataset).
- Obtained PoS tagging + tokenization with the `cleanNLP` package (functions `cnlp_init_udpipe()` + `cnlp_annotate()`) and saved `projs_train_t` (cleaned train dataset); a sketch of that step is shown below.
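A minimal sketch of what that annotation step may look like (the actual code lives in _01a_WB_project_pdo_prep.qmd; here `projs_train` is assumed to hold one row per project, with the project ID in `proj_id` and the PDO text in `pdo`, both column names being assumptions):

# Sketch (assumed column names) of the PoS tagging / tokenization step
library(cleanNLP)
cnlp_init_udpipe(model_name = "english") # downloads the English udpipe model on first use

anno <- cnlp_annotate(projs_train,
                      text_name = "pdo",     # column holding the PDO text (assumption)
                      doc_name  = "proj_id") # column used as document id (assumption)

# cnlp_annotate() returns a list of data frames; the token table has one row per
# token with sid, tid, token, lemma, upos, xpos, relation, etc.
projs_train_t <- anno$token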
Important mod
# Ensure token_id is numeric
pdo_train_t <- pdo_train_t %>%
  mutate(tid = as.numeric(tid)) # convert token_id (tid) from character to numeric

Explain Tokenization and PoS Tagging
i) Tokenization
A “word” is a somewhat abstract notion: a “type” is a distinct word form as it occurs in actual language, and a “token” is a particular instance of it in a text (compare abstract things like ‘wizards’ with individual instances of the thing, like ‘Harry Potter’). Breaking a piece of text into such units is called “tokenization”, and it can be done in many ways.
The choices of tokenization
- Should words be lower cased?
- Should punctuation be removed?
- Should numbers be replaced by some placeholder?
- Should words be stemmed (or lemmatized)? ☑️
- Should bigrams/multi-word phrases be used instead of single-word phrases?
- Should stopwords (the most common words) be removed? ☑️
- Should rare words be removed?
- Should hyphenated words be split into two words? ❌
For the moment I keep everything, as conservatively as possible.
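For illustration only, a minimal sketch of plain word-level tokenization with tidytext (the actual tokenization in this project is done by cleanNLP/udpipe; `projs_train` and its `pdo` column are the same assumed names as above):

# Sketch: word tokenization of the PDO text with tidytext
pdo_words <- projs_train %>%
  unnest_tokens(word, pdo, token = "words", to_lower = TRUE) %>% # lower-cases by default
  anti_join(stop_words, by = "word")                             # optional: drop stopwords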
ii) PoS Tagging
Classifying tokens as noun, verb, adjective, etc. can help discover intent or action in a sentence, for example by scanning for “verb-noun” patterns (see the sketch after the variable table below). Here I have a training dataset file with:
| Variable | Type | Provenance | Description |
|---|---|---|---|
| proj_id | chr | original PDO data | |
| pdo | chr | original PDO data | |
| word_original | chr | original PDO data | |
| sid | int | output cleanNLP | sentence ID |
| tid | chr | output cleanNLP | token ID within sentence |
| token | chr | output cleanNLP | Tokenized form of the token. |
| token_with_ws | chr | output cleanNLP | Token with trailing whitespace |
| lemma | chr | output cleanNLP | The base form of the token |
| upos | chr | output cleanNLP | Universal part-of-speech tag (e.g., NOUN, VERB, ADJ). |
| xpos | chr | output cleanNLP | Language-specific part-of-speech tags. |
| feats | chr | output cleanNLP | Morphological features of the token |
| tid_source | chr | output cleanNLP | Token ID in the source document |
| relation | chr | output cleanNLP | Dependency relation between the token and its head token |
| pr_name | chr | output cleanNLP | Name of the parent token |
| FY_appr | dbl | original PDO data | |
| FY_clos | dbl | original PDO data | |
| status | chr | original PDO data | |
| regionname | chr | original PDO data | |
| countryname | chr | original PDO data | |
| sector1 | chr | original PDO data | |
| theme1 | chr | original PDO data | |
| lendinginstr | chr | original PDO data | |
| env_cat | chr | original PDO data | |
| ESrisk | chr | original PDO data | |
| curr_total_commitment | dbl | original PDO data |
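A minimal sketch of the “verb-noun” scan mentioned above, using the upos tags and the sentence/token order (sid, tid); this is an illustration, not code from the prep script:

# Sketch: find VERB -> NOUN lemma pairs within each sentence
verb_noun_pairs <- pdo_train_t %>%
  arrange(proj_id, sid, tid) %>%
  group_by(proj_id, sid) %>%
  mutate(next_upos  = lead(upos),
         next_lemma = lead(lemma)) %>%
  ungroup() %>%
  filter(upos == "VERB", next_upos == "NOUN") %>%
  count(lemma, next_lemma, sort = TRUE)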
— PoS Tagging: upos (Universal Part-of-Speech)
| upos | n | percent | explanation |
|---|---|---|---|
| ADJ | 21852 | 0.0853714 | Adjective |
| ADP | 27848 | 0.1087965 | Adposition |
| ADV | 3010 | 0.0117595 | Adverb |
| AUX | 3738 | 0.0146036 | Auxiliary |
| CCONJ | 14486 | 0.0565939 | Coordinating conjunction |
| DET | 22121 | 0.0864223 | Determiner |
| INTJ | 81 | 0.0003165 | Interjection |
| NOUN | 72668 | 0.2838993 | Noun |
| NUM | 2285 | 0.0089270 | Numeral |
| PART | 8846 | 0.0345595 | Particle |
| PRON | 2351 | 0.0091849 | Pronoun |
| PROPN | 14860 | 0.0580550 | Proper noun |
| PUNCT | 29442 | 0.1150240 | Punctuation |
| SCONJ | 2219 | 0.0086692 | Subordinating conjunction |
| SYM | 348 | 0.0013596 | Symbol |
| VERB | 26397 | 0.1031278 | Verb |
| X | 3412 | 0.0133300 | Other |
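The summary above can be reproduced with a one-liner; a sketch, assuming janitor::tabyl() was used (tabyl labels the proportion column "percent"):

# Sketch: frequency of universal PoS tags among all tokens
pdo_train_t %>%
  tabyl(upos) %>%   # janitor: counts and proportions per upos value
  arrange(desc(n))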
iii) Make lower case
pdo_train_t <- pdo_train_t %>%
mutate(token_l = tolower(token)) %>%
relocate(token_l, .after = token) %>%
select(-token_with_ws) %>%
#Replace variations of "hyphenword" with "-"
mutate(
lemma = str_replace_all(lemma, regex("hyphenword|hyphenwor", ignore_case = TRUE), "-")
) %>%
mutate(stem = wordStem(token_l)) %>% # SnowballC stemmer
relocate(stem, .after = lemma)

iv) Stemming
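The stemming itself is done by `SnowballC::wordStem()` in the chunk above; a quick illustration of what it produces (the example words are mine, not from the data):

# Quick illustration of Snowball (Porter) stemming on typical PDO vocabulary
wordStem(c("development", "objectives", "improving", "services"))
# expected roughly: "develop" "object" "improv" "servic"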
_______
TEXT ANALYSIS/SUMMARY
_______
NOTE: Among the words/stems encountered in PDOs there are a lot of acronyms, which may refer to World Bank lingo, local agencies, etc. Especially when looked at in lower-case form they don’t make much sense.
see https://cengel.github.io/R-text-analysis/textanalysis.html
Frequencies of documents/words/stems
# Count words
counts_pdo <- pdo_train_t %>%
count(pdo, sort = TRUE) # 4,071
counts_words <- pdo_train_t %>%
count(word_original, sort = TRUE) # 13,441
counts_token <- pdo_train_t %>%
count(token, sort = TRUE) # 13,420
counts_lemma <- pdo_train_t %>%
count(lemma, sort = TRUE) # 11,705
counts_stem <- pdo_train_t %>%
count(stem, sort = TRUE) # 8,812

We are looking at pdo_train_t which has 134,858 rows and 7 columns.
- PDOs = 4071 in projects
- ranging from FY 2001 to FY 2023
| Count | n |
|---|---|
| N proj | 4413 |
| N PDOs | 4071 |
| N words | 13231 |
| N token | 11399 |
| N lemma | 11474 |
| N stem | 8812 |
[FUNC] save plots
library(magrittr) # for the tee pipe %T>% (not attached by library(tidyverse))

f_save_plot <- function(plot_name, plot_object) {
# Print the plot, save as PDF and PNG
plot_object %T>%
print() %T>%
ggsave(., filename = here("analysis", "output", "figures", paste0(plot_name, ".pdf")),
# width = 4, height = 2.25, units = "in",
device = cairo_pdf) %>%
ggsave(., filename = here("analysis", "output", "figures", paste0(plot_name, ".png")),
# width = 4, height = 2.25, units = "in",
type = "cairo", dpi = 300)
}
# Example of using the function
# f_save_plot("proj_wrd_freq", proj_wrd_freq)

Term frequency
[FIG] Overall token freq ggplot
- Without “project”, “development”, “objective(s)”
# Evaluate the title with glue first
title_text <- glue::glue("Most frequent token in {n_distinct(pdo_train_t$proj_id)} PDOs from projects approved between FY {min(pdo_train_t$boardapprovalFY)} and {max(pdo_train_t$boardapprovalFY)}")
proj_wrd_freq <- pdo_train_t %>% # 256,632
filter(!(upos %in% c("AUX", "CCONJ", "INTJ", "DET", "PART", "ADP", "SCONJ", "SYM", "PUNCT"))) %>%
filter(!(relation %in% c("nummod"))) %>% # 173,686
filter(!(token_l %in% c("pdo", "project", "development", "objective", "objectives", "i", "ii", "iii",
"is"))) %>% # "is" when it is tagged as VERB
count(token_l) %>%
filter(n > 800) %>%
mutate(token_l = reorder(token_l, n)) %>% # reorder values by frequency
# plot
ggplot(aes(token_l, n)) +
geom_col(fill = "gray") +
coord_flip() + # flip x and y coordinates so we can read the words better
labs(title = title_text,
subtitle = "[token_l count > 800]", y = "", x = "")
proj_wrd_freq
f_save_plot("proj_wrd_freq", proj_wrd_freq)

[FIG] Overall stem freq ggplot
- Without “project”, “develop”, “object”
# Evaluate the title with glue first
title_text <- glue::glue("Most frequent STEM in {n_distinct(pdo_train_t$proj_id)} PDOs from projects approved between FY {min(pdo_train_t$boardapprovalFY)} and {max(pdo_train_t$boardapprovalFY)}")
# Plot
proj_stem_freq <- pdo_train_t %>% # 256,632
filter(!(upos %in% c("AUX", "CCONJ", "INTJ", "DET", "PART", "ADP", "SCONJ", "SYM", "PUNCT"))) %>%
filter (!(relation %in% c("nummod" ))) %>% # 173,686
filter (!(stem %in% c("pdo","project", "develop", "object", "i", "ii", "iii"))) %>%
count(stem) %>%
filter(n > 800) %>%
mutate(stem = reorder(stem, n)) %>% # reorder values by frequency
# plot
ggplot(aes(stem, n)) +
geom_col(fill = "gray") +
coord_flip() + # flip x and y coordinates so we can read the words better
labs(title = title_text,
subtitle = "[stem count > 800]", y = "", x = "")
proj_stem_freq
f_save_plot("proj_stem_freq", proj_stem_freq)

Evidently, after stemming, more words (or stems) reach the threshold frequency count of 800.
_______
>>>>>> HERE <<<<<<<<<<<<<<<<<<
Main refs:
- https://www.nlpdemystified.org/course/advanced-preprocessing
- review what I had done for cleaning in analysis//03_WDR_pdotracs_explor.qmd
- https://cengel.github.io/R-text-analysis/textprep.html#detecting-patterns
- https://guides.library.upenn.edu/penntdm/r
- https://smltar.com/stemming#how-to-stem-text-in-r (BOOK, STEMMING chapter)

_______
… [FIG] Most frequent bigrams ggplot
- development objective
- development objectives
- Project Development
- private sector ~ 200
- institutional capacity
- service delivery
- improve access
- Development Objectives
- water supply
- proposed project
- public sector ~ 150
- climate change
- investment climate
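A minimal sketch of how bigram counts like these can be obtained with tidytext (assuming the full PDO text is still available in the `pdo` column, one row per project after `distinct()`):

# Sketch: count bigrams in the PDO text, dropping bigrams that contain a stopword
pdo_bigrams <- pdo_train_t %>%
  distinct(proj_id, pdo) %>%
  unnest_tokens(bigram, pdo, token = "ngrams", n = 2) %>%
  separate(bigram, into = c("word1", "word2"), sep = " ") %>%
  filter(!word1 %in% stop_words$word,
         !word2 %in% stop_words$word) %>%
  count(word1, word2, sort = TRUE)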
… [FIG] Notable bigrams (climate change)!
Word and document frequency: Tf-idf
The goal is to quantify what a document is about. What is the document about?
- term frequency (tf) = how frequently a word occurs in a document… but some words occur many times without being important
- inverse document frequency (idf) = decreases the weight of commonly used words and increases the weight of words that are rarely used across a collection of documents
- tf-idf statistic = an alternative to stopword lists: the frequency of a term adjusted for how rarely it is used. [It measures how important a word is to a document in a collection (or corpus) of documents, but it is still a rule-of-thumb or heuristic quantity.]

The tf-idf is the product of the term frequency and the inverse document frequency:

$$\mathrm{tf\text{-}idf}(t, d) = \mathrm{tf}(t, d) \times \mathrm{idf}(t), \qquad \mathrm{idf}(t) = \ln\left(\frac{n_{\text{documents}}}{n_{\text{documents containing } t}}\right)$$
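A minimal sketch of computing it here with `tidytext::bind_tf_idf()`, treating each `proj_id` as a document and lemmas as terms (the content-word filter is an assumption):

# Sketch: tf-idf of lemmas, one "document" per project
pdo_tf_idf <- pdo_train_t %>%
  filter(upos %in% c("NOUN", "VERB", "ADJ")) %>% # keep content words only (assumption)
  count(proj_id, lemma, sort = TRUE) %>%
  bind_tf_idf(term = lemma, document = proj_id, n = n) %>%
  arrange(desc(tf_idf))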
N-Grams
…
Co-occurrence
…
_______
>>>>>> NEXT <<<<<<<<<<<<<<<<<<
_______
Named Entity Recognition using CleanNLP and spaCy
NER is especially useful for analyzing unstructured text.
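A minimal sketch of how entities could be pulled out with the spaCy backend; this assumes (to be verified) that, as in recent cleanNLP versions, annotation with spaCy returns an `entity` table alongside `token`:

# Sketch: named entity extraction via the cleanNLP spaCy backend (assumed API)
cnlp_init_spacy(model_name = "en_core_web_sm")

anno_sp <- cnlp_annotate(distinct(pdo_train_t, proj_id, pdo),
                         text_name = "pdo", doc_name = "proj_id")

# Entity table (assumption): one row per detected entity, with its type (ORG, GPE, ...)
anno_sp$entity %>%
  count(entity_type, sort = TRUE)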
— Summarise the tokens by parts of speech
# Initialize the spacy backend
cnlp_init_spacy()

quarto render analysis/01b_WB_project_pdo_anal.qmd --to html
open ./docs/analysis/01b_WB_project_pdo_anal.html